
Learning Compositional Rules via Neural Program Synthesis

Neural Information Processing Systems

Many aspects of human reasoning, including language, require learning rules from very little data. Humans can do this, often learning systematic rules from very few examples, and combining these rules to form compositional rule-based systems. Current neural architectures, on the other hand, often fail to generalize in a compositional manner, especially when evaluated in ways that vary systematically from training. In this work, we present a neuro-symbolic model which learns entire rule systems from a small set of examples. Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples, drawing upon techniques from the neural program synthesis literature. Our rule-synthesis approach outperforms neural meta-learning techniques in three domains: an artificial instruction-learning domain used to evaluate human learning, the SCAN challenge datasets, and learning rule-based translations of number words into integers for a wide range of human languages.


A Supplementary Material Learning Compositional Rules via Neural Program Synthesis

Neural Information Processing Systems

All models were implemented in PyTorch. For all experiments, we report standard error below. Primitive rules map a word to a color. In a higher-order rule, the left-hand side can be one or two variables and a word, and the right-hand side can be any sequence of bracketed forms of those variables. Figure A.2 shows several example training grammars sampled from the meta-grammar.
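As a rough illustration of this rule format, the sketch below interprets a primitive word-to-color mapping plus one higher-order rule. The specific words, colors, and the `apply_rules` function are our own illustrative assumptions, not the paper's actual grammar.

```python
# Illustrative sketch of the two rule types described above.
# The mapping and the "thrice" rule are made-up examples, not from the paper.

# Primitive rules: word -> color token.
primitives = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def apply_rules(words):
    """Interpret one hypothetical higher-order rule, 'x1 thrice' -> x1 x1 x1,
    on top of the primitive word->color rules."""
    out = []
    i = 0
    while i < len(words):
        if i + 1 < len(words) and words[i + 1] == "thrice":
            # Higher-order rule: repeat the color of the preceding word.
            out.extend([primitives[words[i]]] * 3)
            i += 2
        else:
            # Primitive rule: translate the word directly.
            out.append(primitives[words[i]])
            i += 1
    return out

print(apply_rules(["dax", "thrice", "wif"]))  # ['RED', 'RED', 'RED', 'GREEN']
```

A sampled training grammar would combine several such primitive and higher-order rules; the model must induce the whole system from a handful of input/output pairs.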



Review for NeurIPS paper: Learning Compositional Rules via Neural Program Synthesis

Neural Information Processing Systems

The paper claims that the model "learn[s] entire rule systems from a small set of examples". I'm not convinced that this is the case in this work, nor in the previous work which this one extends. Both methods heavily rely on the support set and on the specific neural attention architecture of the encoder and decoder, which allows for the replacement of individual tokens. This lets the model exploit a pattern in the support set, e.g. "a b c - a c a", by replacing the "a" and "b" on the fly and executing the abstract rule given by the support set.
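The reviewer's concern can be made concrete: a pattern like "a b c - a c a" is executable by pure positional token substitution, with no explicit rule ever represented. The function below is a minimal sketch of that shortcut; the name `apply_support_pattern` and the example tokens are our own, not from the paper or the review.

```python
# Sketch of the reviewer's point: the support pattern "a b c -> a c a"
# can be replayed on a new query by token substitution alone.

def apply_support_pattern(support_in, support_out, query_in):
    """Align query tokens with the support input positionally, then
    replay the support output using the substituted tokens."""
    subst = dict(zip(support_in, query_in))
    return [subst[tok] for tok in support_out]

# The support pair implicitly encodes the abstract rule "x y z -> x z x".
result = apply_support_pattern(["a", "b", "c"], ["a", "c", "a"],
                               ["red", "blue", "green"])
print(result)  # ['red', 'green', 'red']
```

On this view, correct outputs need not imply that an explicit rule system was learned; attention-based token replacement over the support set can suffice.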


Review for NeurIPS paper: Learning Compositional Rules via Neural Program Synthesis

Neural Information Processing Systems

Learning compositional rules is an important research direction. The proposed method achieves 100% accuracy on different train/test splits of SCAN. My main concern with this work is that the approach seems too specific to SCAN, as pointed out by the reviewers.



Learning Compositional Rules via Neural Program Synthesis

Nye, Maxwell I., Solar-Lezama, Armando, Tenenbaum, Joshua B., Lake, Brenden M.

arXiv.org Artificial Intelligence
